Skip to main content
Scour
Browse
Getting Started
Login
Sign Up
You are offline. Trying to reconnect...
Close
Copied to clipboard
Close
Unable to share or copy to clipboard
Close
🛡️ AI Security
Model Poisoning, Adversarial Examples, Prompt Injection, AI Safety
Filter Results
Timeframe
Fresh
Past Hour
Today
This Week
This Month
Feeds to Scour
Subscribed
All
Scoured
29742
posts in
59.8
ms
AI Security in the Foundation Model Era: A Comprehensive Survey from a
Unified
Perspective
🛡️
AI Safety
arxiv.org
·
6d
·
…
DORA
& Threat Lead
Penetration
Testing with Marko
🚩
CTF Writeups
terokarvinen.com
·
2d
·
…
Claude.ai Prompt
Injection
Vulnerability
💉
Prompt Injection
oasis.security
·
6h
·
Hacker News
·
…
Anthropic’s rough week: leaked models, exposed source code, and a
botched
GitHub
takedown
🛡️
Anthropic PBC
thenewstack.io
·
2h
·
…
Ensuring
Trustworthiness
of AI-Enhanced Embedded Systems
🛡️
AI Safety
semiengineering.com
·
1d
·
…
Going out with a
whimper
🛡️
AI Safety
lesswrong.com
·
21h
·
…
Show r/programming: We
scanned
10K AI tool skills for code exploits and prompt injection Title: Show r/programming: We
scanned
10K AI tool skills for code
explo
...
🕳
LLM Vulnerabilities
safeskill.dev
·
5d
·
r/programming
·
…
AI
blueprints
can be stolen with a single small
antenna
🆕
New AI
techxplore.com
·
1d
·
…
The $665 million crypto fund
roiling
the AI safety fight
💳
Content Monetization
politico.com
·
3h
·
…
AI Security Best
Practices
Guide
🛡️
AI Safety
datadoghq.com
·
3d
·
r/programming
·
…
I gave a short high-level overview talk about the
intersection
between economics and AI safety last week. Here are some highlights and
musings
, and the actual s...
🛡️
AI Safety
threadreaderapp.com
·
2d
·
…
Ask HN: How are you
handling
Japanese prompt
injection
in LLM apps?
🕳
LLM Vulnerabilities
news.ycombinator.com
·
1d
·
Hacker News
·
…
State of AI safety: as capabilities grow and models can monitor other models, issues like adversarial
robustness
persist
and society is still not ready for AI (...
🛡️
AI Safety
techmeme.com
·
3d
·
…
Project Mario: How
DeepMind
tried to secure
independence
from Google
🇨🇳
Chinese AI
colossus.com
·
1d
·
Hacker News
·
…
The state of AI safety in four
fake
graphs
🛡️
AI Safety
windowsontheory.org
·
3d
·
Hacker News
,
r/artificial
·
…
Australian government and
Anthropic
sign
MOU
for AI safety and research
🛡️
Anthropic PBC
anthropic.com
·
2d
·
Hacker News
·
…
The
Persistent
Vulnerability of
Aligned
AI Systems
🛡️
AI Safety
arxiv.org
·
18h
·
…
Prompt injection is a lot like SQL injection: take
untrusted
data,
shove
it into a data stream that uses in-band signaling, and hope for the best. A common a...
💉
Prompt Injection
honeypot.net
·
5d
·
…
Indirect
Prompt
Injection
Defense
💉
Prompt Injection
idpishield.com
·
4d
·
Hacker News
·
…
Show HN: Prompt Injection
Experiments
in OpenClaw with
Opus4.6
💉
Prompt Injection
veganmosfet.codeberg.page
·
6d
·
Hacker News
·
…
Loading...
Loading more...
Page 2 »
Keyboard Shortcuts
Navigation
Next / previous item
j
/
k
Open post
o
or
Enter
Preview post
v
Post Actions
Love post
a
Like post
l
Dislike post
d
Undo reaction
u
Recommendations
Add interest / feed
Enter
Not interested
x
Go to
Home
g
h
Interests
g
i
Feeds
g
f
Likes
g
l
History
g
y
Changelog
g
c
Settings
g
s
Browse
g
b
Search
/
Pagination
Next page
n
Previous page
p
General
Show this help
?
Submit feedback
!
Close modal / unfocus
Esc
Press
?
anytime to show this help